10 research outputs found

    Analysis and Prediction of Deforming 3D Shapes using Oriented Bounding Boxes and LSTM Autoencoders

    Full text link
    For time sequences of complex 3D shapes, we present a general approach that detects patterns for their analysis and predicts the deformation by making use of structural components of the complex shape. We incorporate long short-term memory (LSTM) layers into an autoencoder to create low-dimensional representations that allow the detection of patterns in the data and additionally capture the temporal dynamics of the deformation behavior. This is achieved with two decoders, one for reconstruction and one for prediction of future time steps of the sequence. In a preprocessing step, the components of the studied object are converted to oriented bounding boxes, which capture the impact of plastic deformation and reduce the dimensionality of the data describing the structure. The architecture is tested on the results of 196 car crash simulations of a model with 133 different components, where material properties are varied. In the latent representation we can detect patterns in the plastic deformation of the different components. The predicted bounding boxes give an estimate of the final simulation result, and their quality is improved in comparison to different baselines.
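
    The paper does not include code; the following is a minimal PyTorch sketch of the described two-decoder setup (the framework choice and all layer and box-parameter sizes are assumptions): one LSTM decoder reconstructs the input sequence of flattened bounding-box parameters, the other predicts future time steps from the same latent code.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """LSTM autoencoder with two decoders: one reconstructs the input
    sequence, the other predicts future time steps (sizes hypothetical)."""
    def __init__(self, n_features, latent_dim=16, horizon=5):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, latent_dim, batch_first=True)
        self.dec_recon = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.dec_pred = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.out_recon = nn.Linear(latent_dim, n_features)
        self.out_pred = nn.Linear(latent_dim, n_features)

    def forward(self, x):
        # x: (batch, time, features); features are flattened
        # oriented-bounding-box parameters of all components
        _, (h, _) = self.encoder(x)            # h: (1, batch, latent)
        z = h[-1]                              # latent representation
        # Feed the repeated latent code into both decoders
        z_rec = z.unsqueeze(1).repeat(1, x.size(1), 1)
        z_fut = z.unsqueeze(1).repeat(1, self.horizon, 1)
        recon, _ = self.dec_recon(z_rec)
        future, _ = self.dec_pred(z_fut)
        return self.out_recon(recon), self.out_pred(future), z

# Toy usage: 8 sequences, 10 time steps, 133 components x 6 box parameters
x = torch.randn(8, 10, 133 * 6)
model = LSTMAutoencoder(n_features=133 * 6)
recon, pred, latent = model(x)
loss = nn.functional.mse_loss(recon, x)  # plus a prediction loss on future steps
```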

    A geometrical method for low-dimensional representations of simulations

    No full text
    We propose a new data analysis approach for the efficient post-processing of bundles of finite element data from numerical simulations. The approach is based on the mathematical principles of symmetry. We consider the case where simulations of an industrial product are contained in the space of surface meshes embedded in R^3. Furthermore, we assume that distance-preserving transformations exist, albeit unknown, which map simulation to simulation. In this setting, a discrete Laplace-Beltrami operator can be constructed on the mesh, which is invariant to isometric transformations and therefore valid for all simulations. The eigenfunctions of such an operator are used as a common basis for all (isometric) simulations. One can use the projection coefficients instead of the full simulations for further analysis. To extend the idea of invariance, we employ a discrete Fokker-Planck operator, which in the continuous limit converges to an operator invariant under a nonlinear transformation, and use its eigendecomposition accordingly. The data analysis approach is applied to time-dependent datasets from numerical car crash simulations. One observes that only a few spectral coefficients are necessary to describe the data variability, and low-dimensional structures are obtained. The eigenvectors are seen to recover different independent variation modes such as translation, rotation, and global and local deformations. An effective analysis of the data from bundles of numerical simulations is made possible, in particular an analysis of many simulations over time.
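
    As a rough illustration of the spectral projection idea, the sketch below builds a Laplacian on mesh edges and projects a per-vertex field onto its first eigenvectors. This is a simplification: the paper uses a discrete Laplace-Beltrami operator (the uniform edge weights here stand in for a cotangent-type discretization), and all names and sizes are hypothetical.

```python
import numpy as np
import scipy.sparse as sp

def laplacian_basis(edges, n_vertices, k=20):
    """First k eigenvectors of a uniform graph Laplacian on the mesh.
    A cotangent-weighted Laplace-Beltrami discretization would be used
    in practice; uniform weights keep the sketch short."""
    i, j = edges[:, 0], edges[:, 1]
    w = np.ones(len(edges))
    W = sp.coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])),
                      shape=(n_vertices, n_vertices)).tocsr()
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    # Dense eigendecomposition for clarity; for large meshes use sparse
    # solvers such as scipy.sparse.linalg.eigsh with shift-invert
    vals, vecs = np.linalg.eigh(L.toarray())
    return vals[:k], vecs[:, :k]

def spectral_coefficients(field, basis):
    """Project a per-vertex simulation field, e.g. displacements of shape
    (n_vertices, 3) at one time step, onto the common eigenbasis."""
    return basis.T @ field  # (k, 3) spectral coefficients

# Toy example: a path-graph "mesh" with a random displacement field
n = 200
edges = np.column_stack([np.arange(n - 1), np.arange(1, n)])
vals, basis = laplacian_basis(edges, n, k=10)
coeffs = spectral_coefficients(np.random.randn(n, 3), basis)
```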

    Machine learning approaches for repositories of numerical simulation results

    No full text
    Simulations are used intensively in the development process of new industrial products and have achieved a high degree of detail. In that workflow, often up to a thousand finite element model variants, representing different product configurations, are simulated within a few days. Currently, the decision process for finding the optimal product parameters involves the comparative evaluation of large finite element simulation bundles by post-processing each one of these results with 3D visualization software. This time-consuming process creates a severe bottleneck in the product design and evaluation workflow.

    A knowledge-based surrogate modeling approach for cup drawing with limited data

    No full text
    To predict the quality of a process outcome for given process parameters in real time, surrogate models are often adopted. A surrogate model is a statistical model that interpolates between data points obtained either from process measurements or from deterministic models of the process. However, in manufacturing processes the amount of useful data is often limited, so setting up a sufficiently accurate surrogate model is challenging. The present contribution shows how to handle limited data in a surrogate modeling approach using the example of a cup drawing process. The purpose of the surrogate model is to classify the quality of the drawn cup and to predict its final geometry. These classification and regression tasks are solved via machine learning methods. The training data is sampled over a relatively wide range by varying three parameters of a finite element simulation, namely sheet metal thickness, blank holder force, and friction. The geometrical features of the cup are extracted using domain knowledge. Besides this knowledge-based approach, an outlook is given on a data-driven surrogate modeling approach.
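
    A minimal sketch of such a surrogate, assuming scikit-learn and synthetic stand-in data in place of the finite element results (the abstract does not name the specific learners; the models, parameter ranges, and targets below are illustrative): a Gaussian process regressor for a geometric feature of the cup and a classifier for the quality label, both over the three varied parameters.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the FE training data: the three varied process
# parameters (sheet thickness [mm], blank holder force [kN], friction)
X = rng.uniform([0.8, 10.0, 0.05], [1.2, 60.0, 0.15], size=(80, 3))

# Hypothetical targets: a geometric feature of the drawn cup (regression)
# and a pass/fail quality label (classification)
y_geom = 0.5 * X[:, 0] - 0.01 * X[:, 1] + rng.normal(0, 0.01, 80)
y_ok = (y_geom > y_geom.mean()).astype(int)

# Regression surrogate for the final geometry feature
gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                               normalize_y=True)
gpr.fit(X, y_geom)

# Classification surrogate for cup quality (e.g. wrinkling/tearing vs. good)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y_ok)

x_new = np.array([[1.0, 30.0, 0.10]])  # thickness, force, friction
mean, std = gpr.predict(x_new, return_std=True)
print(f"feature: {mean[0]:.3f} +/- {std[0]:.3f}, ok: {clf.predict(x_new)[0]}")
```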

    Compression of NVH simulation results: Presentation held at NAFEMS World Congress (NWC), Crete, Greece, 16th to 19th of June 2009

    No full text
    Crash-specific data compression has become a standard technology that is widely used in the automotive industry. This paper describes lossy data compression of NASTRAN OP2 files as they emerge from NVH simulations in the automotive sector. The largest part of these files contains real or complex eigenmodes, but also element energies and stress or strain tables. During the automotive design process a large number of simulations is performed, and as a result huge amounts of data have to be stored. Using data compression, the size of the files can be significantly reduced, requiring less storage space. The reduction in file size also leads to faster file transfer and I/O times of post-processing tools, which can read compressed files via a specific library. Therefore, efficient compression of OP2 files means a large benefit for NASTRAN users. To achieve very high compression ratios, a lossy data compression scheme was chosen. During compression, floating point data is quantized using a user-controlled precision. This allows users to meet their requirements as a trade-off between the magnitude of the errors and the compression factor. To achieve the best compression ratio at a given precision, the entropy of the data is reduced further by specially designed prediction algorithms. The technologies involved exploit the shell and solid element connectivity information derived from the OP2 files. To accomplish seamless workflow integration, the compressed data must be quickly and easily accessible. Therefore, the HDF5 format was chosen as the output file format. The advantages of the HDF5 file format are exploited during decompression: easy and fast data access facilitates the parallelization of the decompression using OpenMP. This is especially interesting for post-processing tools that directly read OP2 data from the compressed files. The software FEMZIP-N, which makes use of all these features, showed very satisfying compression results on NVH simulations conducted by different automotive companies. Compression ratios between 5.0 and 40.0 have been achieved.
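
    FEMZIP-N's actual predictors are not public, but the general scheme described above can be sketched as follows: quantize floating point values to a user-chosen precision, reduce the entropy of the quantized data with a predictor (here a deliberately simple delta predictor standing in for the connectivity-aware predictors of the software), and store the residuals gzip-compressed in an HDF5 file via h5py. All names and the file layout are illustrative.

```python
import numpy as np
import h5py

def compress(values, precision, path, name="field"):
    """Quantize floats to a user-chosen absolute precision, reduce entropy
    with a simple delta predictor, and store gzip-compressed in HDF5."""
    q = np.round(values / precision).astype(np.int64)  # lossy quantization
    residuals = np.diff(q, prepend=0)                  # delta prediction
    with h5py.File(path, "w") as f:
        ds = f.create_dataset(name, data=residuals, compression="gzip")
        ds.attrs["precision"] = precision

def decompress(path, name="field"):
    with h5py.File(path, "r") as f:
        residuals = f[name][...]
        precision = f[name].attrs["precision"]
    # Undo the delta prediction, then dequantize
    return np.cumsum(residuals) * precision

# Round trip on a smooth synthetic signal; error stays within precision/2
data = np.cumsum(np.random.randn(100_000))
compress(data, precision=1e-3, path="field.h5")
restored = decompress("field.h5")
assert np.max(np.abs(restored - data)) <= 0.5e-3 * 1.001
```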